Type inference for correspondence types
We present a type and effect system for proving correspondence assertions in a π-calculus with polarized channels, dependent pair types and effect terms. Given a process P and a type environment E, we describe how to generate constraints that are formulae in the Alternating Least Fixed-Point (ALFP) logic. A reasonable model of the generated constraints yields a type and effect assignment such that P becomes well-typed with respect to E if and only if this is possible. The formulae generated satisfy a finite model property; a system of constraints is satisfiable if and only if it has a finite model. As a consequence, we obtain the result that type and effect inference in our system is polynomial-time decidable.
That's a mouthful. The part of the paper that perked my virtual ears up:
Most importantly, our approach is general in nature; the encoding of types and terms does not depend on the rules of the type system. For this reason, our approach appears a natural candidate for obtaining similar type inference results for type systems such as [9], where correspondences concern formulas in an arbitrary authorization logic and the underlying process calculus includes cryptographic operations, and type systems for secrecy properties such as [12]. The possibility of such ramifications is currently under investigation.
[9] is A type discipline for authorization policies, which is followed up by Type-checking zero knowledge. The upshot is that it might be possible to have reasonable type inference support for a dependent type and effect system with cryptographic operations, supporting some of the most powerful privacy and security primitives and protocols currently known. I find this very exciting!
Formal proofs, especially in support of mathematics, are not something that I work with (I use a lot of intuition in analyzing my code). I found that the articles in the American Mathematical Society's Special Issue on Formal Proof are fairly good introductions to the subject of using proof assistants in Formalizing 100 Math Theorems.
Traditional mathematical proofs are written in a way to make them easily understood by mathematicians. Routine logical steps are omitted. An enormous amount of context is assumed on the part of the reader. Proofs, especially in topology and geometry, rely on intuitive arguments in situations where a trained mathematician would be capable of translating those intuitive arguments into a more rigorous argument.
A formal proof is a proof in which every logical inference has been checked all the way back to the fundamental axioms of mathematics. All the intermediate logical steps are supplied, without exception. No appeal is made to intuition, even if the translation from intuition to logic is routine. Thus, a formal proof is less intuitive, and yet less susceptible to logical errors.
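To give a flavor of what "every logical inference checked" looks like in practice, here is a minimal sketch in Lean, a proof assistant in the same family as those mentioned below (the examples are mine, not from the articles):

-- Every inference below is checked mechanically by Lean's kernel;
-- nothing is left to intuition.
-- `rfl` succeeds only because 2 + 2 and 4 reduce to the same term
-- under the definitional rules of Nat addition.
theorem two_plus_two : 2 + 2 = 4 := rfl

-- A routine logical step made explicit: rewriting with hypothesis h.
theorem congr_succ (a b : Nat) (h : a = b) : a + 1 = b + 1 := by
  rw [h]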
Though the articles are focused on the application to mathematical proofs, they do give a background on languages that are continually mentioned on LtU (HOL, Coq, Isabelle, etc.).
Wolfram Schulte introduces us to the Research in Software Engineering (RiSE) group.
The RiSE group will start publishing weekly videos on Channel9. The videos will not only describe tools or technologies the researchers are working on, but they will also explain how the tools work through 'classroom' videos.
Screencast announcement at Channel9.
Page at Microsoft Research.
Yukiyoshi Kameyama, Oleg Kiselyov, Chung-chieh Shan, Shifting the Stage: Staging with Delimited Control. PEPM 2009.
It is often hard to write programs that are efficient yet reusable. For example, an efficient implementation of Gaussian elimination should be specialized to the structure and known static properties of the input matrix. The most profitable optimizations, such as choosing the best pivoting or memoization, cannot be expected of even an advanced compiler because they are specific to the domain, but expressing these optimizations directly makes for ungainly source code. Instead, a promising and popular way to reconcile efficiency with reusability is for a domain expert to write code generators.
Two pillars of this approach are types and effects. Typed multilevel languages such as MetaOCaml ensure safety: a well-typed code generator neither goes wrong nor generates code that goes wrong. Side effects such as state and control ease correctness: an effectful generator can resemble the textbook presentation of an algorithm, as is familiar to domain experts, yet insert let for memoization and if for bounds-checking, as is necessary for efficiency. However, adding effects blindly renders multilevel types unsound...
Code generators have to be written in CPS or monadic style because of the need for let-insertion and if-insertion, which makes writing them unattractive to many programmers. As an alternative, the paper proposes a sound combination of staging and delimited continuations.
The combination has been implemented. The authors show how pleasant it becomes to write interesting code generators (specializing dynamic programming and Gaussian elimination codes). The paper and the code for many generators are available online. I suggest reading the explanation of the problem first, and then looking at the actual code generators.
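To make the let-insertion problem concrete, here is a minimal sketch in Python rather than MetaOCaml; the names fresh, genlet and gen_dot are my own illustrations, not the paper's API. It generates code as strings, simulates let with immediately applied lambdas, and shows how the generator is forced into continuation-passing style. The explicit continuations threaded through every call are exactly what the paper's shift/reset would let one leave implicit.

_counter = 0

def fresh():
    """Return a fresh variable name for the generated code."""
    global _counter
    _counter += 1
    return "t%d" % _counter

def genlet(expr, k):
    # Let-insertion: the generated code evaluates `expr` once, binds it
    # to a fresh name, and the continuation k generates the rest of the
    # code in which that name is in scope.
    v = fresh()
    return "(lambda %s: %s)(%s)" % (v, k(v), expr)

def gen_dot(xs, ys, k):
    # Generate code for the dot product of two vectors whose length is
    # statically known, memoizing each product in a let-bound variable.
    if not xs:
        return k("0")
    return genlet("%s * %s" % (xs[0], ys[0]),
                  lambda v: gen_dot(xs[1:], ys[1:],
                                    lambda rest: k("%s + (%s)" % (v, rest))))

print(gen_dot(["x0", "x1"], ["y0", "y1"], lambda e: e))
# (lambda t1: (lambda t2: t1 + (t2 + (0)))(x1 * y1))(x0 * y0)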
Qi II has been released. Qi is a functional programming language built on top of Common Lisp. It has an optional static type system based on the sequent calculus and a general focus on logic-based programming. For version II, see the what's new page. Rule closures in particular look very interesting.
Unlike Qi I, Qi II allows you to embed sequent rules within functions, evaluating them to closures. These rule closures are type checked and are permeable to having their variables lexically bound from outside the scope of the rule itself. These devices enable the student of computational logic to effortlessly code complex logical systems of all descriptions. Thus the rule
let PTerm/X (replace-by X Term P)
PTerm/X, (all X P) >> Q;
____________________
(all X P) >> Q;
allows universally quantified assumptions to be instantiated to new premises. This rule can be embedded into a Qi II function called all-left, which does precisely this job. The rule is turned into a closure by the rule function, and the closure is then applied to the problem (a list of sequents).
(define all-left
  {term --> [sequent] --> [sequent]}
  Term S -> ((rule let PTerm/X (replace-by X Term P)
              PTerm/X, (all X P) >> Q;
              ____________________
              (all X P) >> Q;)
             S))
FPQi devotes a hundred pages to the exploration of this powerful construction.
Also with this release is a new book: Functional Programming in Qi.
Earlier versions of Qi have been mentioned on LtU here and here.
While teaching INGI1131, my concurrent programming course, I have become even more impressed by a concurrent paradigm, namely functional programming extended with threads and ports, which I call multi-agent dataflow programming. This paradigm has many good properties:
- The declarative concurrent subset (no ports) has no race conditions and can be programmed like a functional language. The basic concept is dataflow synchronization of single-assignment variables. A useful data structure is the stream, a list with a dataflow tail used as a communication channel.
- Nondeterminism can be added exactly where needed and minimally, by using ports. A port is simply a named stream to which any thread can send (see the sketch after this list).
- All functional building blocks are concurrency patterns. Map, fold, filter, etc., are all useful for building concurrent programs. Here are two examples: a contract net protocol and an observable port object. Chapter 5 of CTM gives many more examples.
- Concurrent systems can be configured in any order and even concurrently with actual use of the system. Note that this is true for ports even though they can express nondeterminism.
- Designing concurrent programs is amazingly easy. For example, any declarative part of the program can be put in its own thread, loosening the coupling between the system's parts, without changing correctness.
- The paradigm is easy to implement efficiently.
- The paradigm is easily extended to support fault-tolerant transparent distributed programming: see Raphael Collet's dissertation. Google's MapReduce is a famous example.
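As promised above, here is a minimal sketch of ports and streams using Python threads and queues. Python's queues only approximate Oz's single-assignment dataflow variables, and the names make_port and stream_map are my own illustrations, not from CTM:

import threading, queue

def make_port():
    # A port: a stream (here, a queue) to which any thread can send.
    q = queue.Queue()
    return q.put, q  # (send procedure, stream to read from)

def stream_map(f, in_stream, out_send):
    # `map` as a concurrency pattern: a thread that transforms one
    # stream into another. None marks the end of the stream.
    while True:
        x = in_stream.get()
        out_send(None if x is None else f(x))
        if x is None:
            return

send, stream = make_port()
out_send, out_stream = make_port()
threading.Thread(target=stream_map,
                 args=(lambda x: x * x, stream, out_send)).start()

for i in range(3):
    send(i)          # any thread may send to the port
send(None)

while (y := out_stream.get()) is not None:
    print(y)         # prints 0, 1, 4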
This paradigm seems to be exactly what is needed for both small and big parallel systems (both multicore and Internet, tight and loose coupling). I am surprised that it is not used more often. What do you think? Does it deserve a bigger role?
The When, Why and Why Not of the BETA Programming Language by Bent Bruun Kristensen, Ole Lehrmann Madsen, and Birger Møller-Pedersen, from HOPL-III. BETA was an ambitious follow-up to Simula, with orthogonality as a major design goal. The main things I found of interest are the attempt to create a unified abstraction mechanism (the pattern), the emphasis on modeling consistency between design and implementation, and the use of coroutines (à la Simula) for concurrency.
BETA is a programming language that has only one abstraction mechanism, the pattern, covering abstractions like record types, classes with methods, types with operations, methods, and functions. Specialization applies to patterns in general, thus providing a class/subclass mechanism for class patterns, a subtype mechanism for type patterns, and a specialization mechanism for methods and functions.
And while I'm at it, the original HOPL-I entry on The Development of the SIMULA Languages by Kristen Nygaard and Ole-Johan Dahl is available (starts on page 3). SIMULA is one of a handful of the most influential programming languages of all time. I found the following to be amusing:
In the spring of 1967 a new employee at the NCC in a very shocked voice told the switchboard operator: "Two men are fighting violently in front of the blackboard in the upstairs corridor. What shall we do?" The operator came out of her office, listened for a few seconds and then said: "Relax, it's only Dahl and Nygaard discussing SIMULA".
(Link to previous HOPL-III papers on LtU).
Lots of buzz surrounding Clojure these days. The main knock on Clojure has been the lack of tail call optimization. Because the JVM currently lacks TCO, Clojure doesn't support automatic TCO, though the plan is that if the JVM ever adds the capability, Clojure will implement it at that point. The recur syntax is a way to mimic TCO for self-recursion. Now trampoline is available for more involved forms of recursion, such as mutual recursion:
I've added trampoline to ease the conversion/creation of mutually recursive algorithms in Clojure.

Trampolines are a well known technique for implementing TCO - instead of actually making a call in the tail, a function returns the function to be called, and a controlling loop calls it, thus allowing chained execution without stack growth.
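The same mechanics, sketched in Python rather than Clojure (the thunk-returning convention mirrors what Clojure's trampoline expects; is_even/is_odd is the standard mutual-recursion example):

def trampoline(f, *args):
    # Run a thunk-returning computation in constant stack space:
    # keep calling as long as the result is itself callable.
    result = f(*args)
    while callable(result):
        result = result()
    return result

def is_even(n):
    # Instead of making a tail call, return a thunk for the
    # trampoline's loop to invoke.
    return True if n == 0 else (lambda: is_odd(n - 1))

def is_odd(n):
    return False if n == 0 else (lambda: is_even(n - 1))

print(trampoline(is_even, 1000000))  # True, with no stack overflow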
As an aside, I have been tinkering with the language and it seems well thought out. For a newbie like myself, the main difference maker is the promotion of common sequences (lists, vectors, maps, sets) to built-in first-class collections, making it comparable in ease of use to other scripting languages.
For those who like their PL history presented as avant-garde beat poetry, a video of Steele & Gabriel's 50 in 50 speech at JAOO is made to order. Or as the link says:
A fun, artistic and enlightening presentation full of interesting facts - and who better to do it than Richard P. Gabriel and Guy L. Steele (the great Quux). Nothing more to say than the rallying cry: More cowbell!
Setting aside the Steven Wright comic delivery of the two speakers, there are a lot of interesting thoughts, though very few are dwelled on. I think the most interesting aspect is the languages they chose as expositions for the major ideas they covered. Here are the ones that I picked out (though I ended up with only 49):
| Concept | Language(s) | Concept | Language(s) |
| Do Loops | Fortran (Pascal, APL) | Guarded Commands | Algol-68 |
| Array Origin | C, Fortran, Pascal, APL | Extensible Language | PPL |
| Domain Specific Language | APT | Structured Programming | BLISS, INTERCAL |
| Text vs. Environment | Algol-60, Lisp, Smalltalk | Language as Educational Tool | Logo |
| Stack Machines | Befunge (SECD Machine, Forth) | Formal Dynamic Semantics | SECD |
| Data Parallelism | APL | Enumerated Types | Pascal |
| Coercion | PL/I (Fortran-V) | Backtracking and Theorem Proving | Conniver (Prolog) |
| Hierarchical Records | COBOL | Argument Handling | Common Lisp, Ada, Python (VB, C#, Suneido, PL/pgSQL) |
| Pointers & Lists | IPL-V | Coding in Natural Language | Perligata (COBOL, Hypercard) |
| Parsing | Yacc (LR1, Recursive Descent) | Computational Drama | Shakespeare |
| Linked Records | AED | Reasoning | Prolog |
| Mathematical Syntax | MADCAP, MIRFAC, Klerer-May System | Type Declarators | C |
| Line Numbers | Basic (Focal, APL) | Data Abstraction | CLU, Alphard |
| Visual Languages | Piet | Dynamic vs. Lexical Scoping | Scheme |
| Pattern Matching & Replacement | COMIT, SNOBOL | Knowledge Representation | KRL (Conniver, Microplanner) |
| Branding | Ada (COMIT, SNOBOL, TRAC) | Stream Processing | Lucid |
| Dynamic Languages | AMBIT/L | Generic Functions | Common Lisp |
| Program as Data | Lisp | Reflection | 3-Lisp |
| Macro Processor | TRAC, ML/I, Limp, M4 | Metacircular Interpreters | Lisp |
| Call By Name vs. Call By Value | C, Algol-60 | Functional Programming | KRC |
| Dangling Else | Algol-60 | Control Parallelism | Occam |
| Formal Static Semantics | Algol-68 | Domain Specific Languages | HQ9+, MUMBLE |
| Algebraic Formula Manipulation | Formac (Macsyma, Mathematica) | Build Languages | Make, Ant, Rake (JCL) |
| Message Passing | Smalltalk (C++, C#, Java, Flavors, Common Loops, CLOS, Scheme, Dylan, Simula, Self) | Scripting | Perl |
| Objects | Simula (Smalltalk, C++, Java) | | |
Type-Checking Zero Knowledge
This paper presents the first type system for statically analyzing security protocols that are based on zero-knowledge proofs. We show how certain properties offered by zero-knowledge proofs can be characterized in terms of authorization policies and statically enforced by a type system. The analysis is modular and compositional, and provides security proofs for an unbounded number of protocol executions. We develop a new type-checker that conducts the analysis in a fully automated manner. We exemplify the applicability of our technique to real-world protocols by verifying the authenticity and secrecy properties of the Direct Anonymous Attestation (DAA) protocol. The analysis of DAA takes less than three seconds.
A little dependent type theory, a little Pi calculus, a little cryptography... there's something for everyone! This is follow-up work to this story that Ehud posted.
The source code can be found here.